
    Extension of software quality prediction models using fuzzy logic and domain heuristics

    Thesis digitized by the Direction des bibliothèques of the Université de Montréal

    Facial image pre-processing and emotion classification: A deep learning approach

    © 2019 IEEE. Facial emotion detection and expression analysis are vital for applications that require credibility assessment, truthfulness evaluation, and deception detection. However, most research reports low accuracy in emotion detection, mainly due to the low quality of the images under consideration. Intensive pre-processing combined with artificial intelligence, especially deep learning techniques, increases the accuracy of computational predictions. Our research focuses on emotion detection using deep learning techniques and combined pre-processing activities. We propose a solution that applies and compares four deep learning models for image pre-processing, with the main objective of improving emotion recognition accuracy. Our methodology covers three major stages in the data value chain: pre-processing, deep learning, and post-processing. We evaluate the proposed scheme on a real facial dataset, namely the Facial Image Data of Indian Film Stars. The experiments compare the performance of various deep learning techniques on the facial image data and confirm that our approach, which significantly enhances image quality through intensive pre-processing and deep learning, improves the accuracy of emotion prediction.
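
    As a companion to the abstract above, here is a minimal sketch of a pre-processing-plus-CNN emotion classification pipeline in the spirit of the described approach. The 48x48 grayscale input, the Haar-cascade face detector, the seven emotion labels, and the small network are illustrative assumptions; they are not the four models or the dataset used in the paper.

```python
# Illustrative sketch (not the paper's code): grayscale face crop, histogram
# equalisation, then a small CNN emotion classifier.
import cv2
import tensorflow as tf

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]
FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(image_bgr, size=48):
    """Detect the largest face, equalise contrast, and resize to size x size."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = FACE_DETECTOR.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:                       # crop to the largest detected face
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        gray = gray[y:y + h, x:x + w]
    gray = cv2.equalizeHist(gray)            # intensive pre-processing step
    gray = cv2.resize(gray, (size, size))
    return gray.astype("float32") / 255.0

def build_model(size=48, n_classes=len(EMOTIONS)):
    """A small CNN; the paper compares several deeper architectures."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(size, size, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train[..., None], y_train, epochs=10)  # x_train stacks preprocess() outputs
```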

    Big data quality framework: a holistic approach to continuous quality management

    Big Data is an essential research area for governments, institutions, and private agencies seeking to support their analytics decisions. Big Data is all about data: how it is collected, processed, and analyzed to generate value-added, data-driven insights and decisions. Degradation in data quality may have unpredictable consequences, since confidence in the data and its source, and their worthiness, is lost. In the Big Data context, data characteristics such as volume, multiple heterogeneous data sources, and fast data generation increase the risk of quality degradation and require efficient mechanisms to check data worthiness. However, ensuring Big Data Quality (BDQ) is a very costly and time-consuming process, since excessive computing resources are required. Maintaining quality throughout the Big Data lifecycle requires quality profiling and verification before any processing decision. We propose a BDQ Management Framework that enhances pre-processing activities while strengthening data control. The framework relies on a new concept called the Big Data Quality Profile, which captures the quality outline, requirements, attributes, dimensions, scores, and rules. Using the framework's profiling and sampling components, a faster and more efficient data quality estimation is performed before and after an intermediate pre-processing phase. The exploratory profiling component plays an initial role in quality profiling; it uses a set of predefined quality metrics to evaluate important data quality dimensions. It generates quality rules by applying various pre-processing activities and their related functions. These rules feed the Data Quality Profile and result in quality scores for the selected quality attributes. We discuss the framework implementation and the dataflow management across the various quality management processes, and conclude with ongoing work on framework evaluation and deployment to support quality evaluation decisions.
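
    The sketch below illustrates, under assumptions, what a Data Quality Profile built from sampling and profiling might look like: per-attribute scores for a couple of quality dimensions plus generated pre-processing rules. The dimension names, thresholds, and rule format are invented for the example and are not the framework's actual specification.

```python
# Illustrative sketch of a "Data Quality Profile": sample the data, score a few
# quality dimensions, and emit simple repair rules for the pre-processing stage.
from dataclasses import dataclass, field
import pandas as pd

@dataclass
class DataQualityProfile:
    scores: dict = field(default_factory=dict)   # attribute -> dimension scores
    rules: list = field(default_factory=list)    # suggested pre-processing rules

def profile_sample(df: pd.DataFrame, sample_frac: float = 0.1,
                   completeness_threshold: float = 0.95) -> DataQualityProfile:
    """Estimate quality on a sample instead of the full (expensive) dataset."""
    sample = df.sample(frac=sample_frac, random_state=0)
    profile = DataQualityProfile()
    for column in sample.columns:
        completeness = 1.0 - sample[column].isna().mean()
        uniqueness = sample[column].nunique() / max(len(sample), 1)
        profile.scores[column] = {"completeness": round(completeness, 3),
                                  "uniqueness": round(uniqueness, 3)}
        if completeness < completeness_threshold:
            profile.rules.append(f"impute_or_drop({column})")
    return profile

# A pre-processing stage would read the profile and apply its rules before the
# full dataset is (re-)processed, then the data would be profiled again.
```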

    Empowering Patient Similarity Networks through Innovative Data-Quality-Aware Federated Profiling

    Continuous monitoring of patients involves collecting and analyzing sensory data from a multitude of sources. To overcome communication overhead, ensure data privacy and security, reduce data loss, and maintain efficient resource usage, processing and analytics are moved close to where the data are located (e.g., the edge). However, data quality (DQ) can be degraded by imprecise or malfunctioning sensors, dynamic changes in the environment, transmission failures, or delays. It is therefore crucial to keep an eye on data quality and spot problems as quickly as possible, so that they do not mislead clinical judgments and lead to the wrong course of action. In this article, a novel approach called federated data quality profiling (FDQP) is proposed to assess the quality of data at the edge. FDQP is inspired by federated learning (FL) and serves as a condensed document, or guide, for node-level data quality assurance. A formal FDQP model is developed to capture the quality dimensions specified in the data quality profile (DQP). The proposed approach uses federated feature selection to improve classifier precision and to rank features based on criteria such as feature value, outlier percentage, and missing-data percentage. Extensive experiments were conducted using a fetal dataset split across different edge nodes and a carefully chosen set of scenarios to evaluate the proposed FDQP model. The results demonstrate that the data-quality-aware federated PSN architecture, leveraging the FDQP model with data collected from edge nodes, effectively improves data quality and, in turn, the accuracy of federated patient similarity network (FPSN)-based machine learning models. Our profiling algorithm relies on lightweight profile exchange instead of full data processing at the edge, which achieves good data quality while improving efficiency. Overall, FDQP is an effective method for assessing data quality in edge computing environments, and we believe the proposed approach can be applied to scenarios beyond patient monitoring.
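
    A minimal sketch of lightweight federated quality profiling as described above: each edge node computes a small per-feature profile (missing and outlier percentages) and only the profiles are exchanged and aggregated to rank features. The z-score outlier test and the penalty-based ranking are assumptions for illustration, not the FDQP algorithm itself.

```python
# Illustrative sketch: per-node quality profiles are small summaries; raw data
# never leaves the edge node, only the profiles are sent to the aggregator.
import numpy as np

def local_profile(node_data: np.ndarray) -> dict:
    """Per-feature quality summary computed locally at the edge node."""
    profile = {}
    for j in range(node_data.shape[1]):
        col = node_data[:, j]
        missing = np.isnan(col).mean()
        values = col[~np.isnan(col)]
        if values.size:
            z = np.abs((values - values.mean()) / (values.std() + 1e-9))
            outliers = (z > 3).mean()              # simple z-score outlier rate
        else:
            outliers = 1.0
        profile[j] = {"missing_pct": float(missing),
                      "outlier_pct": float(outliers)}
    return profile

def rank_features(profiles: list) -> list:
    """Aggregator ranks features by average quality across the node profiles."""
    n_features = len(profiles[0])
    penalty = [np.mean([p[j]["missing_pct"] + p[j]["outlier_pct"]
                        for p in profiles])
               for j in range(n_features)]
    return sorted(range(n_features), key=lambda j: penalty[j])  # best first
```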

    Trustworthy Federated Learning: A Survey

    Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI), enabling collaborative model training across distributed devices while maintaining data privacy. As the importance of FL increases, addressing trustworthiness issues in its various aspects becomes crucial. In this survey, we provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL. Despite the growing literature on trustworthy centralized Machine Learning (ML)/Deep Learning (DL), further effort is needed to identify trustworthiness pillars and evaluation metrics specific to FL models, and to develop solutions for computing trustworthiness levels. We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy. Each pillar represents a dimension of trust and is further broken down into different notions. Our survey covers trustworthiness challenges at every level of the FL setting. We present a comprehensive architecture of Trustworthy FL, address the fundamental principles underlying the concept, and offer an in-depth analysis of trust assessment mechanisms. In conclusion, we identify key research challenges related to every aspect of Trustworthy FL and suggest future research directions. This survey serves as a valuable resource for researchers and practitioners working on the development and implementation of Trustworthy FL systems, contributing to a more secure and reliable AI landscape.
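
    A toy sketch of the three-pillar taxonomy and of a trust-level aggregation, for illustration only; the notions listed under each pillar and the weighted-average aggregation are simplifications, not the survey's full breakdown or its assessment mechanisms.

```python
# Illustrative sketch: the taxonomy as a data structure plus a toy trust score.
TRUSTWORTHY_FL_TAXONOMY = {
    "Interpretability": ["explainability of the global model",
                         "client-level explanations"],
    "Fairness": ["client selection fairness",
                 "performance fairness across clients"],
    "Security & Privacy": ["secure aggregation",
                           "differential privacy",
                           "robustness to poisoning"],
}

def trust_level(pillar_scores: dict, weights: dict = None) -> float:
    """Toy weighted aggregation of per-pillar scores in [0, 1] into one level."""
    weights = weights or {p: 1.0 / len(pillar_scores) for p in pillar_scores}
    return sum(pillar_scores[p] * weights[p] for p in pillar_scores)

print(trust_level({"Interpretability": 0.6, "Fairness": 0.8,
                   "Security & Privacy": 0.9}))
```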

    Novel cloud and SOA-based framework for E-health monitoring using wireless biosensors

    Various independent studies show that the exponential increase in chronic diseases (CDs) is exhausting governmental and private healthcare systems, to the extent that some countries allocate half of their budget to healthcare. Among approaches that benefit from IT developments, e-health monitoring and prevention have proven to be among the most promising. Well-implemented monitoring and prevention schemes have reported a notable reduction in CD risk and have narrowed their effects on both patients' health conditions and government healthcare spending. In this paper, we propose a framework to collect patients' data in real time, perform appropriate non-intrusive monitoring, and propose medical and/or lifestyle engagements whenever needed and appropriate. The framework, which relies on service-oriented architecture (SOA) and the Cloud, allows a seamless integration of different technologies, applications, and services. It also integrates mobile technologies to smoothly collect and communicate vital data from a patient's wearable biosensors, while taking into account the mobile devices' limited capabilities and power constraints as well as intermittent network disconnections. Data are then stored in the Cloud and made available via SOA to allow easy access by physicians, paramedics, or any other authorized entity. A case study has been developed to evaluate the usability of the framework, and the preliminary results are very promising.
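
    The following is a minimal sketch of the client-side collection loop such a framework implies: readings from wearable biosensors are buffered on the mobile device and uploaded to a cloud endpoint when the network allows. The endpoint URL, payload format, and retry strategy are hypothetical.

```python
# Illustrative sketch: buffer biosensor readings locally and flush them to the
# cloud when connectivity is available, tolerating intermittent disconnections.
import json
import time
import requests

CLOUD_ENDPOINT = "https://ehealth.example.org/api/v1/vitals"   # hypothetical
buffer = []                                # survives temporary disconnections

def collect(reading: dict) -> None:
    """Queue a reading such as {'patient_id': 'p1', 'hr': 72, 'ts': time.time()}."""
    buffer.append(reading)

def flush(timeout: float = 5.0) -> None:
    """Try to push buffered readings; keep them locally if the network is down."""
    global buffer
    if not buffer:
        return
    try:
        resp = requests.post(CLOUD_ENDPOINT, data=json.dumps(buffer),
                             headers={"Content-Type": "application/json"},
                             timeout=timeout)
        resp.raise_for_status()
        buffer = []                        # uploaded successfully, clear cache
    except requests.RequestException:
        pass                               # intermittent disconnection: retry later
```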

    Enforcing quality of service within web services communities

    Web services are an attractive distributed approach to application and service integration over the Internet. As the number of Web services grows exponentially and is expected to keep doing so over the next decade, categorizing and classifying Web services is crucial to their success and to the success of the underlying Service-Oriented Architecture (SOA). Categorization aims at organizing Web services according to their functionalities and their Quality of Service attributes. Communities of Web services have been used to gather Web services based on their functionalities; Web services in a community can offer similar and/or complementary services. In this paper, we extend the classification of Web services communities by adding a new support layer for Quality of Service classification. This is done through Quality of Service specification, monitoring, and adaptation of Web services within communities. A Web service might be admitted to a community thanks to its high Quality of Service or ejected from a community due to its low Quality of Service. The focus of this paper is on the design and use of a managerial community whose Web services monitor and adapt the Quality of Web Services (QoWS) for other communities, Web service providers, and Web service clients.
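
    A minimal sketch of the membership rule such a managerial community could apply: admit a Web service whose monitored QoWS is high and eject one whose QoWS falls below thresholds. The QoWS attributes and thresholds are assumptions for the example.

```python
# Illustrative sketch: admit or eject community members based on monitored QoWS.
from dataclasses import dataclass

@dataclass
class WebService:
    name: str
    response_time_ms: float
    availability: float        # fraction of successful probes, in [0, 1]

def review_membership(service: WebService, community: set,
                      max_latency_ms: float = 500.0,
                      min_availability: float = 0.98) -> None:
    """Admit services with high QoWS, eject those that fall below thresholds."""
    meets_qos = (service.response_time_ms <= max_latency_ms
                 and service.availability >= min_availability)
    if meets_qos:
        community.add(service.name)
    else:
        community.discard(service.name)

weather_community = set()
review_membership(WebService("FastWeather", 120.0, 0.995), weather_community)
review_membership(WebService("SlowWeather", 900.0, 0.90), weather_community)
print(weather_community)   # -> {'FastWeather'}
```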

    A cooperative approach for QoS-aware web services' selection

    As Web services become omnipresent on the Web, quality of Web services (QoWS) support and management is becoming a hot research topic. Several frameworks for Web service selection have been proposed to support clients in selecting suitable Web services. They are often based on a middle-tier component that makes the selection decision, and such solutions often suffer from scalability problems. To deal with this issue, we propose in this paper a new architecture for Web service selection based on a federation of cooperative brokers. Each broker of the federation manages the Web services within its domain and cooperates with its peers by exchanging information about Web services, such as reputation and QoS, to better serve client requests. We have developed a prototype of the architecture and conducted experiments using three broker selection policies, namely “random”, “round-robin”, and “cooperative brokers”, to evaluate the degree of fulfillment of client requests. Preliminary results show that the cooperative brokers policy performs best.
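
    The sketch below illustrates the three broker selection policies compared in the experiments ("random", "round-robin", and "cooperative brokers"), with the cooperative policy simplified to preferring the broker that reports the best QoS for the requested service; the broker names, scores, and exchange protocol are assumptions.

```python
# Illustrative sketch of the three broker selection policies.
import itertools
import random

BROKERS = ["broker-a", "broker-b", "broker-c"]        # hypothetical federation
_round_robin = itertools.cycle(BROKERS)

# broker -> {service: quality score}, as exchanged between cooperating brokers
shared_qos = {"broker-a": {"pay": 0.7}, "broker-b": {"pay": 0.9}, "broker-c": {}}

def select_broker(service: str, policy: str) -> str:
    if policy == "random":
        return random.choice(BROKERS)
    if policy == "round-robin":
        return next(_round_robin)
    if policy == "cooperative":
        # prefer the broker reporting the best QoS for this service
        return max(BROKERS,
                   key=lambda b: shared_qos.get(b, {}).get(service, 0.0))
    raise ValueError(f"unknown policy: {policy}")

print(select_broker("pay", "cooperative"))   # -> broker-b
```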

    Multi-tier framework for management of web services' quality

    Web services are a new breed of applications that have received broad support from major industry vendors as well as academia. As the Web services paradigm matures, its management becomes crucial to its adoption and success. Existing approaches are often limited to the platforms on which the management features are provided. In this chapter, we propose an approach that provides a single central console for managing both functional and non-functional aspects of Web services. We aim to develop a framework that offers management features to providers and clients by supporting management activities throughout the lifecycle. The framework allows, and in some cases requires, providers to consider management activities while developing their Web services. It allows clients to select appropriate Web services using different criteria (name, quality). Clients also use the framework to check whether the Web services they are using, or planning to use, are behaving correctly. We evaluate the Web services management features of our framework using a composite Web service.
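
    A minimal sketch of the two client-facing features mentioned above: selecting Web services by criteria (name, quality) and checking that a selected service behaves correctly. The registry structure and the probe call are hypothetical.

```python
# Illustrative sketch: select registered services by name/quality and verify
# that a chosen service responds as expected.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RegisteredService:
    name: str
    quality: float                      # aggregated quality score in [0, 1]
    probe: Callable[[], bool]           # invokes the service and checks the reply

REGISTRY: List[RegisteredService] = []

def select(name_contains: str = "", min_quality: float = 0.0) -> List[RegisteredService]:
    """Select services by the criteria the framework exposes (name, quality)."""
    return [s for s in REGISTRY
            if name_contains.lower() in s.name.lower() and s.quality >= min_quality]

def behaves_correctly(service: RegisteredService, attempts: int = 3) -> bool:
    """Clients verify a service they are using or planning to use."""
    return all(service.probe() for _ in range(attempts))
```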

    Towards an adaptive QoS-driven monitoring of cloud SaaS

    The abundance of services offered by cloud providers raises the need for Quality of Service (QoS) monitoring to enforce Service Level Agreements (SLAs). In this paper, we propose a cloud-monitoring scheme that offers flexible and dynamically reconfigurable QoS monitoring services able to adapt to the characteristics of various cloud-based services. We propose to grant cloud service consumers the option to switch to another service provider if the current provider frequently violates the corresponding SLA contract. For cloud service providers, the monitoring scheme provides dashboard-like indicators to continuously manage the performance of their SaaS platform and visualise the monitoring data. Our experimental evaluation involves a real cloud platform and illustrates the capability of our monitoring scheme to detect and report SLA violations for both single and composite SaaS services. Particular emphasis is placed on visualisation options for revealing and reconfiguring monitoring data in a user-friendly display.
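
    A minimal sketch of the consumer-side SLA check behind such a monitoring scheme: monitored QoS samples are compared against SLA thresholds, violations are recorded in a sliding window, and a provider switch is suggested when violations become frequent. The metric names, thresholds, and window size are assumptions.

```python
# Illustrative sketch: detect SLA violations from monitoring samples and decide
# whether the consumer should consider switching providers.
from collections import deque

SLA = {"response_time_ms": 300.0, "availability": 0.999}   # hypothetical contract
recent_violations = deque(maxlen=100)                       # sliding window

def check_sample(sample: dict) -> list:
    """Return the list of SLA clauses violated by one monitoring sample."""
    violations = []
    if sample["response_time_ms"] > SLA["response_time_ms"]:
        violations.append("response_time_ms")
    if sample["availability"] < SLA["availability"]:
        violations.append("availability")
    recent_violations.append(bool(violations))
    return violations

def should_switch_provider(max_violation_rate: float = 0.2) -> bool:
    """Consumer-side decision: switch if violations are frequent in the window."""
    if not recent_violations:
        return False
    return sum(recent_violations) / len(recent_violations) > max_violation_rate

check_sample({"response_time_ms": 450.0, "availability": 0.999})  # -> ['response_time_ms']
print(should_switch_provider())
```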